
    A new priority rule cloud scheduling technique that utilizes gaps to increase the efficiency of jobs distribution

    In recent years, the concept of cloud computing has been gaining traction as a way to provide dynamically scalable access to shared computing resources (software and hardware) via the internet. Cloud computing's ability to supply mission-critical services has made job scheduling a central topic in the field. However, efficient utilization of cloud resources remains a challenge: poor scheduling often results in wasted capacity or degraded service performance. Existing research has therefore focused on queue-based job scheduling techniques, where jobs are scheduled according to specific deadlines or job lengths. Numerous researchers have tried to improve existing Priority Rule (PR) cloud schedulers by developing dynamic scheduling algorithms, but these have fallen short on user-satisfaction metrics such as flowtime, makespan, and total tardiness. These limitations of current PR schedulers are mainly caused by blocking from jobs at the head of the queue, and they lead to poor performance of cloud-based mobile applications and other cloud services. To address this issue, the main objective of this research is to improve existing PR cloud schedulers by developing a new dynamic scheduling algorithm that exploits the gaps in the cloud job schedule. In this thesis, a Priority-Based Fair Scheduling (PBFS) algorithm is first introduced to schedule jobs so that they gain access to the required resources at optimal times. Then, a backfilling strategy called Shortest Gap Priority-Based Fair Scheduling (SG-PBFS) is proposed that exploits the gaps in the schedule of cloud jobs.
    Finally, the performance evaluation demonstrates that the proposed SG-PBFS algorithm outperforms SG-SJF, SG-LJF, SG-FCFS, SG-EDF, and SG-(MAX-MIN) in terms of flow time, makespan, and total tardiness, which demonstrates its effectiveness. The experimental results show that for 500 jobs, the flow time, makespan, and tardiness of SG-PBFS are 9%, 4%, and 7% lower than those of PBFS, respectively.
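    The gap-manipulation idea above can be sketched as a small single-machine scheduler: jobs are ordered by priority, and each job is placed in the earliest idle gap large enough to hold it. The data layout, priority ordering, and release times below are illustrative assumptions, not the thesis's exact SG-PBFS formulation.

```python
# Minimal sketch of gap-aware backfilling on one machine. A job is a tuple
# (job_id, length, priority, release); all values here are invented examples.

def schedule_with_gaps(jobs):
    """Return {job_id: start_time} for a gap-filling priority schedule."""
    # Higher priority first; ties broken by shorter length (backfill-friendly).
    ordered = sorted(jobs, key=lambda j: (-j[2], j[1]))
    busy = []                    # sorted (start, end) occupied intervals
    starts = {}
    for job_id, length, _, release in ordered:
        t = release
        for s, e in busy:
            if s - t >= length:  # idle gap before this interval fits the job
                break
            t = max(t, e)        # otherwise push the start past this interval
        starts[job_id] = t
        busy.append((t, t + length))
        busy.sort()
    return starts

jobs = [("a", 4, 3, 3), ("b", 2, 2, 0), ("c", 1, 1, 0)]
starts = schedule_with_gaps(jobs)   # {"a": 3, "b": 0, "c": 2}
```

    Here job `b` runs at t=0 before the higher-priority `a` (which is not released until t=3), and `c` backfills the one-slot gap at t=2 instead of waiting until `a` finishes at t=7.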

    Edge assisted crime prediction and evaluation framework for machine learning algorithms

    The growing global population, particularly in major cities, has created new problems, notably in terms of public safety regulation and optimization. As a result, this paper provides a strategy for predicting crime occurrences in a city based on historical events and demographic observation. In particular, this study proposes a crime prediction and evaluation framework for machine learning algorithms at the network edge. A complete analysis of four distinct types of crime, namely murder, rapid trial, repression of women and children, and narcotics, validates the efficiency of the proposed framework. The full study and implementation process yield a visual representation of crime in various areas of the country. The overall work comprises the selection, assessment, and implementation of the Machine Learning (ML) model, followed by the proposed crime prediction. Criminal risk is predicted using classification models for a particular time interval and place. To anticipate occurrences, ML methods such as Decision Trees, Neural Networks, K-Nearest Neighbors, and Impact Learning are utilized, and their performance is compared based on the data processing and modification used. A maximum accuracy of 81% is obtained for the Decision Tree algorithm during crime prediction. The findings demonstrate that employing machine learning techniques aids the prediction of criminal events, which in turn helps enhance public security.
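    The classification step can be illustrated with the K-Nearest Neighbors method the paper lists: each historical record is a feature vector (e.g. hour of day, area code) labelled with a crime type, and a query location is assigned the majority label of its nearest records. The features, labels, and data below are invented for illustration, not the paper's dataset.

```python
# Toy k-nearest-neighbours classifier for crime-type prediction.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label); query: feature tuple of same length."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical records: (hour_bucket, area_code) -> crime type.
train = [
    ((0, 0), "theft"), ((0, 1), "theft"),
    ((5, 5), "narcotics"), ((5, 6), "narcotics"), ((6, 5), "narcotics"),
]
knn_predict(train, (5, 5))   # "narcotics"
```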

    Comparative study on job scheduling using priority rule and machine learning

    Cloud computing is a powerful technique for running resource-intensive applications at large scale. Implementing a suitable scheduling algorithm is critical in order to use cloud resources properly. Shortest Job First (SJF) and Longest Job First (LJF) are two well-known schedulers now used to manage cloud tasks. Although such algorithms are simple and straightforward to implement, they are limited in their ability to deal with the dynamic nature of the cloud. In our research, we compare the performance metrics of the priority algorithms with those of a machine learning algorithm. We conducted our experiments in CloudSim and Google Colab. CPU time, turnaround time, wall clock time, waiting time, and execution start time are all measured in this study. The cloudlet is assigned to the CPU in both time-sharing and space-sharing modes, while the VM is always allocated in space-sharing mode. We achieved better results for SJF, and the machine learning algorithm also produced a decent outcome.
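    The waiting-time and turnaround-time metrics being compared can be computed directly for SJF and LJF orderings. The burst times below are illustrative; the sketch assumes a single CPU with all jobs arriving at time zero, which is a simplification of the CloudSim setup.

```python
# Compare SJF and LJF on average waiting and turnaround time.

def metrics(bursts):
    """Return (avg_waiting, avg_turnaround) for jobs run in the given order."""
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)        # job waits until the CPU frees up
        clock += b
        turnaround.append(clock)     # completion time since arrival at t=0
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

bursts = [6, 2, 8, 3]                             # hypothetical job lengths
sjf = metrics(sorted(bursts))                     # shortest job first
ljf = metrics(sorted(bursts, reverse=True))       # longest job first
# sjf == (4.5, 9.25), ljf == (9.75, 14.5)
```

    On any workload, SJF minimizes average waiting time among non-preemptive orderings, which is the usual reason it outperforms LJF on these metrics.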

    Optimal safety planning and driving decision-making for multiple autonomous vehicles: A learning based approach.

    In the early diffusion stage of autonomous vehicle systems, controlling vehicles through exact decision-making to reduce the number of collisions is a major problem. This paper offers a DRL-based safety planning and decision-making scheme for emergencies that lead to both first and multiple collisions. Firstly, the lane-changing process and braking method are thoroughly analyzed, taking into account the critical aspects of developing an autonomous driving safety scheme. Secondly, we propose a DRL strategy that specifies the optimal driving techniques. We use a multiple-goal reward system to balance the accomplishment rewards from cooperative and competitive approaches, accident severity, and passenger comfort. Thirdly, the deep deterministic policy gradient (DDPG), a basic actor-critic (AC) technique, is used to mitigate multiple-collision problems. This approach can improve the efficacy of the optimal strategy while remaining stable for continuous control. In an emergency, an adequately trained agent vehicle can adopt optimal driving behaviors to enhance driving safety. Extensive simulations demonstrate our concept's effectiveness in learning efficiency, decision accuracy, and safety.
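    The multiple-goal reward idea can be sketched as a weighted scalarization: the single reward fed to the DDPG learner blends task progress against collision-severity and comfort penalties. The weights and terms below are assumptions for illustration, not the paper's actual reward design.

```python
# Illustrative multi-goal reward: progress is rewarded, collision severity
# and jerk (a comfort proxy) are penalized. Weights are hypothetical.

def combined_reward(progress, collision_severity, jerk, w=(1.0, 5.0, 0.1)):
    """Scalar reward for one time step; higher is better."""
    w_progress, w_collision, w_comfort = w
    return (w_progress * progress
            - w_collision * collision_severity
            - w_comfort * abs(jerk))
```

    Tuning these weights is how such schemes trade off safety against comfort: a large collision weight makes even mild contact dominate the signal.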

    Computer-aided system for extending the performance of diabetes analysis and prediction

    Every year, diabetes causes health difficulties for hundreds of millions of individuals throughout the world. Patients' medical records may be utilized to quantify symptoms, physical characteristics, and clinical laboratory test data, which can then be used for biostatistical analysis to uncover patterns or characteristics that are currently undetected. In this work, we used six machine learning algorithms to predict diabetes in patients, and the causes of diabetes are illustrated as percentages using pie charts. The machine learning algorithms are used to predict the risk of Type 2 diabetes. Users can self-assess their diabetes risk once the model has been trained. Based on the experimental results, the AdaBoost classifier achieves an accuracy of almost 98 percent.
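    The best-performing model here, AdaBoost, can be sketched in a few lines: it repeatedly fits a weak learner (a one-feature decision stump below) on reweighted data, then combines the stumps by weighted vote. The toy one-dimensional data and round count are illustrative, not the paper's clinical dataset or configuration.

```python
# Minimal AdaBoost with decision stumps; labels are in {-1, +1}.
import math

def train_adaboost(X, y, rounds=3):
    """X: list of feature tuples, y: labels. Returns stumps (f, t, pol, alpha)."""
    n = len(X)
    w = [1.0 / n] * n                        # uniform sample weights to start
    stumps = []
    for _ in range(rounds):
        best = None                          # (error, feature, threshold, polarity, preds)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    pred = [pol if x[f] <= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        stumps.append((f, t, pol, alpha))
        # Upweight misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(a * (pol if x[f] <= t else -pol) for f, t, pol, a in stumps)
    return 1 if score >= 0 else -1

X = [(1.0,), (2.0,), (3.0,), (4.0,)]         # hypothetical single feature
y = [1, 1, -1, -1]
stumps = train_adaboost(X, y)
```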

    Impact Learning: A Learning Method from Features Impact and Competition

    Machine learning is the study of computer algorithms that can improve automatically based on data and experience. Machine learning algorithms build a model from sample data, called training data, to make predictions or judgments without being explicitly programmed to do so. A variety of well-known machine learning algorithms have been developed in the field of computer science to analyze data. This paper introduces a new machine learning algorithm called impact learning. Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems, and it is particularly suited to analyzing competitive data: the algorithm learns from competitive situations, where the competition arises from the effects of independent features. It is trained on the impacts of the features derived from the intrinsic rate of natural increase (RNI). We moreover demonstrate the advantage of impact learning over conventional machine learning algorithms.

    A review on job scheduling technique in cloud computing and priority rule based intelligent framework

    In recent years, the concept of cloud computing has been gaining traction as a way to provide dynamically scalable access to shared computing resources (software and hardware) via the internet. It's no secret that cloud computing's ability to supply mission-critical services has made job scheduling a hot subject in the industry. Cloud resources may be wasted, or in-service performance may suffer, because of under-utilization or over-utilization, respectively, due to poor scheduling. Various strategies from the literature are examined in this research in order to characterize the planning and performance of Job Scheduling Techniques (JST) in cloud computing. To begin, we examine and tabulate the existing JSTs linked to cloud and grid computing. The present achievements are then thoroughly reviewed, difficulties and flaws are identified, and intelligent solutions are devised based on the proposed taxonomy. To bridge the gaps between present investigations, this paper also provides readers with a conceptual framework, in which we propose an effective job scheduling technique for cloud computing. These findings are intended to inform academics and policymakers about the advantages of a more efficient cloud computing setup. Fair job scheduling is paramount in cloud computing, so we propose a priority-based scheduling technique to ensure it. Finally, the open research questions raised in this article will create a path for the implementation of an effective job scheduling strategy.

    Transfer learning for sentiment analysis using bert based supervised fine-tuning

    The growth of the Internet has expanded the amount of data expressed by users across multiple platforms. The availability of these different worldviews and individuals' emotions empowers sentiment analysis. However, sentiment analysis becomes even more challenging due to a scarcity of standardized labeled data in the Bangla NLP domain. The majority of existing Bangla research has relied on deep learning models that largely use context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT have recently revolutionized the state of natural language processing. In this work, we utilized BERT's transfer learning ability in a deep integrated CNN-BiLSTM model for enhanced decision-making performance in sentiment analysis. In addition, we applied transfer learning to classical machine learning algorithms for performance comparison with the CNN-BiLSTM. We also explore various word embedding techniques, such as Word2Vec, GloVe, and fastText, and compare their performance to the BERT transfer learning strategy. As a result, we report state-of-the-art binary classification performance for Bangla sentiment analysis that significantly outperforms all other embeddings and algorithms.

    Multiple vehicle cooperation and collision avoidance in automated vehicles : Survey and an AI‑enabled conceptual framework

    Prospective customers are becoming more concerned about safety and comfort as the automobile industry shifts toward automated vehicles (AVs). A comprehensive evaluation of recent AV collision data indicates that modern automated driving systems are prone to rear-end collisions, usually leading to multiple-vehicle collisions. Moreover, most investigations into severe traffic conditions are confined to single-vehicle collisions. This work reviewed diverse techniques from the existing literature to provide planning procedures for multiple-vehicle cooperation and collision avoidance (MVCCA) strategies in AVs, while also considering their performance and social-impact viewpoints. Firstly, we investigate and tabulate the existing MVCCA techniques associated with single-vehicle collision avoidance perspectives. Then, current achievements are extensively evaluated, challenges and flaws are identified, and remedies are intelligently formed to exploit a taxonomy. This paper also aims to give readers an AI-enabled conceptual framework and a decision-making model with a concrete structure of the training network settings to bridge the gaps between current investigations. These findings are intended to shed insight into the benefits of a more efficient AV set-up for academics and policymakers. Lastly, the open research issues discussed in this survey will pave the way for the actual implementation of driverless automated traffic systems.
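    A building block common to the rear-end-collision avoidance schemes surveyed is a time-to-collision (TTC) check between a following and a lead vehicle. The simplified constant-speed kinematics and the 3-second threshold below are assumptions for illustration, not a model taken from the survey.

```python
# Time-to-collision between a follower and a lead vehicle on the same lane.

def time_to_collision(gap_m, follower_speed, lead_speed):
    """Return TTC in seconds, or None if the follower is not closing the gap.
    Speeds in m/s, gap in metres; constant speeds assumed."""
    closing = follower_speed - lead_speed
    if closing <= 0:
        return None                 # gap is constant or growing: no collision
    return gap_m / closing

def needs_braking(gap_m, follower_speed, lead_speed, ttc_threshold=3.0):
    """Trigger a braking decision when TTC drops below the threshold."""
    ttc = time_to_collision(gap_m, follower_speed, lead_speed)
    return ttc is not None and ttc < ttc_threshold
```

    In a multi-vehicle setting, each follower would run this check against its immediate leader; cooperative schemes then coordinate who brakes or changes lane first.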

    AI powered asthma prediction towards treatment formulation : An android app approach

    Asthma is a disease that attacks the lungs and affects people of all ages. Asthma prediction is crucial because many individuals already have asthma and the number of asthma patients continues to increase. Machine learning (ML) has been demonstrated to help individuals make judgments and predictions based on vast amounts of data. Because Android applications are widely available, it will be highly beneficial to individuals if they can receive therapy through a simple app. In this study, a machine learning approach is utilized to determine whether or not a person is affected by asthma. In addition, an Android application is created to give therapy based on machine learning predictions. To collect data, we enlisted the help of 4,500 people, recording 23 asthma-related characteristics. We utilized eight robust machine learning algorithms to analyze this dataset and found that the Decision Tree classifier performed best of the eight, with an accuracy of 87%. TensorFlow is utilized to integrate machine learning with the Android application, which was developed in Java on the Android Studio platform.